Process variation dimension reduction based on SVD
Abstract
We propose an algorithm based on singular value decomposition (SVD) to reduce the number of process variation variables. With few process variation variables, fault simulation and timing analysis under process variation can be performed efficiently. Our algorithm reduces the number of process variation variables while preserving the delay function with respect to process variation. Compared with the principal component analysis (PCA) method, our algorithm requires less computation time and guarantees that the reduced process variation variables are independent. Experimental results on ISCAS85 circuits show that the algorithm works well.

1 INTRODUCTION

In modern multi-layer VLSI circuits, the dimension of the process variation is large. For example, in a k-metal technology, there are 2k variables for metal width and thickness, k variables for inter-layer dielectric (ILD) thickness, a gate length variable, and some other variables. With such a large number of process variation variables, timing analysis under process variation is very time consuming and may not find the worst-case process corner [1][2]. In this paper, we propose the novel concept of variation dimension reduction. The idea originates from the following observation: for different nets, the underlying process variation variables are the same, and therefore the delays of the nets are correlated. We model the delay from an input pin of a cell to an input pin of a downstream cell, also known as the buffer-to-buffer delay, as a linear function of the process variation variables: d = dnominal·(1 + a1·x1 + a2·x2 + ... + an·xn), where x1, x2, ..., xn are the process variation variables, a1, a2, ..., an are their corresponding coefficients, and dnominal is the nominal delay. For different nets, the coefficients differ, but the process variation variables are the same.
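As a concrete illustration, the linear delay model above can be sketched in numpy. The sizes, sensitivity coefficients, and nominal delays below are made-up placeholders, not values from the paper:

```python
import numpy as np

# Hypothetical sizes: m nets, n process variation variables.
m, n = 4, 3
rng = np.random.default_rng(0)

d_nominal = rng.uniform(10.0, 50.0, size=m)   # nominal buffer-to-buffer delays (ps)
A = rng.uniform(-0.2, 0.2, size=(m, n))       # sensitivity coefficients per net

def delay(x):
    """Linear delay model: d = d_nominal * (1 + a1*x1 + ... + an*xn)."""
    return d_nominal * (1.0 + A @ x)

# At the nominal process corner (x = 0) the model returns the nominal delays.
print(np.allclose(delay(np.zeros(n)), d_nominal))  # True
```

The rows of A play the role of the per-net coefficient vectors (a1, ..., an); the shared vector x is what the dimension reduction in Section 2 operates on.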
The assumption that the delay is linear with respect to each process variation variable is supported by SPICE simulation of real circuits within reasonable ranges of process variation [3]. There are many forms of process variation [4][5]. In this paper, we consider only systematic process variation and assume the process variation variables among different nets are fully correlated. We assume the following variation ranges: metal thickness ±20%, metal width ±10%, ILD thickness ±40%, and gate length ±10%. To obtain the delay function with respect to process variation, we first extract the interconnect parasitics from the layout and compute the nominal buffer-to-buffer delay for all nets. Then, for each process variation variable, we modify the layout or the technology file by a specified amount (for example, +10% for metal1 thickness) and perform RC extraction and delay evaluation for the modified layout. The coefficient for this process variation variable is obtained by interpolating the two delays.

Recently, the principal component analysis (PCA) method was used to reduce the dimension of process variation [7]. It was shown to be effective for analyzing analog circuits under process variation. However, the PCA method is inefficient for digital circuits, because it constructs and solves a correlation matrix of size mn×mn, where n is the number of process variation variables and m is the number of nets or devices. For digital circuits, m is easily in the hundreds or thousands. In this paper, we propose a new method based on singular value decomposition (SVD) [6]. Our algorithm detects linear dependence and reduces the number of process variation variables while minimizing the average error of the delay. Note that although the PCA method also uses SVD, its matrix is of size mn×mn while ours is of size m×n.
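The perturb-and-interpolate coefficient extraction described above can be mimicked in numpy. Here `extract_delay` is a hypothetical stand-in for the RC-extraction and delay-evaluation flow, with made-up sensitivities, used only to show how each coefficient falls out of the two delay evaluations:

```python
import numpy as np

def extract_delay(perturbation):
    """Hypothetical stand-in for layout modification + RC extraction +
    delay evaluation; internally uses pretend linear sensitivities."""
    true_coeffs = np.array([0.08, -0.03, 0.15])
    d_nominal = 42.0
    return d_nominal * (1.0 + true_coeffs @ perturbation)

d_nom = extract_delay(np.zeros(3))        # nominal layout
coeffs = []
for i in range(3):
    dx = np.zeros(3)
    dx[i] = 0.10                          # vary one variable by +10%
    d_pert = extract_delay(dx)
    # Interpolating the nominal and perturbed delays gives coefficient a_i.
    coeffs.append((d_pert - d_nom) / (d_nom * dx[i]))

print(coeffs)
```

Because the underlying model is linear, the interpolated coefficients recover the sensitivities exactly; for real SPICE data they would be a best linear fit over the perturbation range.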
To make the reduced process variables linearly/statistically independent, which is desirable for fault simulation and timing analysis, we also propose a sub-matrix SVD method. In Section 2, the SVD method and the sub-matrix SVD method for dimension reduction are given. In Section 3, we show the experimental results. They show that if we simply pick the most important process variation variables and discard the rest, the average error of the net delay is twice the average error produced by our algorithm.

2 ALGORITHM

2.1 Singular Value Decomposition (SVD)

The SVD method is based on the following theorems [6].

Theorem 1. If A is an m×n matrix, then there exist an orthogonal m×m matrix U=[u1,...,um] and an orthogonal n×n matrix V=[v1,...,vn] such that U^T·A·V = diag(σ1,...,σp), an m×n matrix, where p = min{m,n} and σ1 ≥ σ2 ≥ ... ≥ σp ≥ 0. The σi's are the singular values of A, and the vectors ui and vi are the i-th left and right singular vectors, respectively. The process of finding U and V is called SVD. It can be done by the Golub-Reinsch algorithm with time complexity O(m²n + mn² + n³) [6] (p. 254). To compute U(:,1:n)=[u1,...,un], which is an m×n (m>n) matrix, the time complexity is O(mn² + n³) [6].

Theorem 2. Let the SVD of the m×n matrix A be as defined in Theorem 1. If r < rank(A) and we let A* = σ1·u1·v1^T + σ2·u2·v2^T + ... + σr·ur·vr^T, then min over rank(B)=r of ||A−B||2 = ||A−A*||2 = σ_{r+1}. In other words, A* is the best 2-norm approximation of A among all rank-r matrices.

2.2 Application to Dimension Reduction

We are given the delays of m nets represented as A·x, where A is an m×n matrix, n is the number of process variation variables, x=[x1,...,xn]^T is the vector of process variation variables, and the xi's are independent of each other. From Theorem 2, A* = U(:,1:r)·S·V(:,1:r)^T, where S = diag(σ1,...,σr), U(:,1:r)=[u1,...,ur], and V(:,1:r)=[v1,...,vr]. Therefore, A·x ≈ A*·x = U(:,1:r)·S·V(:,1:r)^T·x = B·z, where B = U(:,1:r)·S is an m×r matrix and z = V(:,1:r)^T·x is an r×1 vector.
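Theorem 2 (the Eckart-Young property the construction relies on) can be checked numerically with numpy's SVD on a random sensitivity matrix; the matrix here is illustrative, not circuit data:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 8))       # m x n sensitivity matrix (m >> n)

# Thin SVD: A = U diag(s) Vt, with s sorted in decreasing order.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

r = 3
# Rank-r truncation A* = sum_{i<=r} sigma_i u_i v_i^T.
A_star = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

# The 2-norm (spectral norm) of the residual equals sigma_{r+1}.
err = np.linalg.norm(A - A_star, 2)
print(np.isclose(err, s[r]))           # s[r] is sigma_{r+1} (0-based indexing)
```

The same truncation gives the B = U(:,1:r)·S and z = V(:,1:r)^T·x factors used above, so the worst-case 2-norm error of the reduced delay model is exactly σ_{r+1}.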
Since the dimension of B is much smaller than that of A, we use B·z to approximate A·x. If we just want to reduce the number of process variation variables, then B·z is the resulting delay function, and z is the vector of new process variation variables. However, circuit simulation often imposes the additional requirement that the new process variation variables z1, ..., zr be independent. For example, if z1 = x1 − x2 + x3 and z2 = x1 + x2 − x3, then z1 and z2 are not independent, although the number of variables is reduced from 3 to 2.

2.3 Sub-Matrix SVD (SMSVD)

We propose a sub-matrix SVD method (SMSVD) to ensure independence among the zi's. The basic idea is to partition the columns of A into r sub-matrices and then apply SVD to each sub-matrix. The algorithm is as follows:
1) Partition A into sub-matrices A1, A2, ..., Ar, where A=[A1, A2, ..., Ar] and Ai is an m×qi matrix, i=1,...,r.
2) For i=1,...,r, compute the rank-1 approximation matrix Ai* of Ai through SVD. Defining the SVD of Ai as Ui^T·Ai·Vi = diag(σi1,...,σip), we have Ai* = σi1·Ui(:,1)·Vi(:,1)^T and ||Ai − Ai*||2 = σi2.
3) Construct B = [U1(:,1)σ11, U2(:,1)σ21, ..., Ur(:,1)σr1] and z = [V1(:,1)^T·x(1), ..., Vr(:,1)^T·x(r)]^T, where x(i) is the sub-vector of x corresponding to the columns of Ai. Each zi depends on a disjoint subset of the xj's, so the zi's are independent.
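Assuming x is partitioned into blocks matching the column partition of A, the SMSVD steps can be sketched in numpy as follows (sizes and data are illustrative):

```python
import numpy as np

def smsvd(A, splits):
    """Sub-matrix SVD sketch: partition the columns of A, take a rank-1
    SVD approximation of each sub-matrix Ai, and stack the dominant
    left directions (scaled by sigma_i1) into B."""
    B_cols, V_tops, start = [], [], 0
    for q in splits:
        Ai = A[:, start:start + q]
        Ui, si, Vit = np.linalg.svd(Ai, full_matrices=False)
        B_cols.append(si[0] * Ui[:, 0])   # column U_i(:,1) * sigma_i1
        V_tops.append((start, Vit[0]))    # V_i(:,1)^T acts on x's i-th block
        start += q
    return np.column_stack(B_cols), V_tops

rng = np.random.default_rng(2)
A = rng.standard_normal((100, 6))         # 100 nets, 6 variation variables
B, V_tops = smsvd(A, splits=[2, 2, 2])    # reduce to r = 3 variables

x = rng.standard_normal(6)
# z_i = V_i(:,1)^T x^(i): each z_i depends only on its own block of x,
# so the reduced variables stay independent.
z = np.array([vt @ x[start:start + len(vt)] for start, vt in V_tops])
print(B.shape, z.shape)                   # (100, 3) (3,)
```

B·z then approximates A·x, and because each zi is built from a disjoint block of the original variables, independence of the xi's carries over to the zi's.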